
Published in Vol 12 (2026)

This is a member publication of University of Nottingham (Jisc)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/87012.
Use of Digital Technology for Developing Communication Skills in Undergraduate and Postgraduate Medical Education: Scoping Review


1Primary Care Education Unit, Centre for Academic Primary Care, School of Medicine, University of Nottingham, Room C37, C Floor, The Medical School, Queens Medical Centre, Nottingham, United Kingdom

2Education Centre, School of Medicine, University of Nottingham, Nottingham, United Kingdom

3Centre for Public Health and Epidemiology, School of Medicine, University of Nottingham, Nottingham, United Kingdom

4Centre for Evidence Based Healthcare, Faculty of Medicine and Health Sciences, University of Nottingham, Nottingham, United Kingdom

Corresponding Author:

Jaspal Taggar, MBChB, MSc, PhD


Background: Effective doctor-patient communication is fundamental to safe, high-quality health care and is a core competency across undergraduate and postgraduate medical education. Communication skills training (CST) has traditionally relied on workforce-intensive methods such as role-play and standardized patient encounters, which face increasing pressure from rising student numbers, constrained faculty capacity, and growing clinical workloads. Digital technologies offer scalable, flexible alternatives, yet the extent, educational design, and strength of evidence supporting digital CST remain unclear.

Objective: This study aimed to comprehensively map the digital technologies used for CST in undergraduate and postgraduate medical education, examine the reported outcomes in the context of educational theory, and identify gaps relevant for future research and clinical practice.

Methods: This scoping review followed Joanna Briggs Institute (JBI) methodology and is reported in accordance with PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. Four electronic databases (Medline, Embase, CINAHL, ERIC) were searched from inception to January 5, 2026. Eligible studies examined any digital technology used to support active, 2-way CST for undergraduate or postgraduate medical learners. Passive learning approaches were excluded. Data were synthesized descriptively. To support structured interpretation of heterogeneous outcomes, interventions were mapped to Kolb’s experiential learning cycle to examine learning processes and to Kirkpatrick’s evaluation model to assess depth of educational and translational impact.

Results: A total of 11,179 records were identified, of which 121 studies met the inclusion criteria. Most studies were published within the past decade (92/121, 76%) and were conducted in North America and Europe (93/121, 76.9%), with 58.7% (71/121) of studies focusing on undergraduate learners. Recording-based tools (51/121, 42.1%), live stream platforms (33/121, 27.3%), and virtual patient simulators (32/121, 26.4%) were the most used digital technologies. General communication and history taking was the most frequently taught topic. Only 28.1% (34/121) of studies used validated objective outcome measures. Educationally, digital interventions overwhelmingly supported early stages of experiential learning (120/121, 99.2%), with almost no progression to abstract conceptualization or active experimentation. Outcome evaluation was similarly limited in depth; most studies assessed outcomes at Kirkpatrick Levels 1 and 2. Few studies evaluated behavior change in clinical practice (6/121, 5%) or patient-level outcomes (1/121, 0.8%). A small but growing subset of studies incorporated artificial intelligence, primarily within virtual patient simulators, showing promising but methodologically limited evidence.

Conclusions: Although digital CST interventions show promise for supporting early-stage learning outcomes, the evidence is constrained by weak study designs, inconsistent use of validated measures, and minimal real-world evaluation. Current technologies support only initial phases of experiential learning, with no evidence of progression to competency development or translation into improved patient care. For educators investing in digital CST, these technologies should be integrated thoughtfully within broader curricula rather than treated as standalone solutions, accompanied by evaluation extending to clinical outcomes. Future research that prioritizes robust comparative designs evaluating whether digital training meaningfully improves clinical communication and patient care is warranted.

JMIR Med Educ 2026;12:e87012

doi:10.2196/87012


Background

Effective doctor-patient communication is fundamental to the delivery of safe and high-quality care. Strong communication enhances patient outcomes by building trust, supporting collaborative decision-making, improving treatment adherence, and promoting better self-care [1]. This importance is reflected in empirical evidence showing that poorer physician communication is associated with higher risks of patient nonadherence, whereas communication skills training (CST) is linked to meaningful improvements in adherence and other patient-relevant outcomes [2]. Furthermore, interpersonal communication is a core graduate competency for medical training programs globally, and the need to develop and refine communication skills throughout a medical career is recognized as a critical professional standard [3,4]. However, many communication skills are not acquired spontaneously during medical training and instead require deliberate instruction to develop [5].

Methods of teaching communication skills during medical education have traditionally relied on didactic lectures, role-playing, and standardized patient interactions facilitated by more senior clinicians [6]. These methods are workforce-intensive, particularly when they include the personalized, specific feedback that has been shown to be effective in improving skill acquisition [6]. Pressures on effective communication skills teaching are further compounded by increasing medical student numbers [7], a global shortage of health professionals [8,9], and rising clinical workloads among supervisors [10].

Digital technologies have therefore been increasingly adopted in medical education as a means of addressing these structural constraints. Unlike traditional communication skills teaching, which depends on synchronous, staff-intensive delivery, digital approaches such as virtual patient (VP) simulators and online modules require substantial upfront development but can subsequently be reused at scale, offer flexible and repeated access, and enable learners to practice a wider range of communication scenarios than is feasible in conventional formats [11-13]. These approaches support self-directed learning and allow simulation of interactions that are difficult to stage with standardized patients, including communication involving rare or complex clinical presentations. As a result, digital technologies are increasingly viewed as a means of expanding training capacity and equity while reducing reliance on faculty-intensive delivery and preserving patient safety [11].

There has been a rapid acceleration in the use of digital technology within medical education since 2018, particularly following the shift toward virtual learning environments during the COVID-19 pandemic [14,15], though its application specifically for teaching and learning communication skills remains less well evidenced. Emerging digital modalities, including immersive virtual reality and artificial intelligence (AI)–supported interventions, are increasingly being explored for CST, reflecting a widening technological landscape beyond conventional online or recording-based approaches [16]. Previous reviews examining digital approaches to CST in medical education have tended to adopt narrow foci. For example, Kyaw et al [11] restricted inclusion to randomized controlled trials and focused primarily on knowledge and skills outcomes without examining learning processes. Similarly, Fernández-Alcántara et al [17] concentrated exclusively on virtual simulation tools. Reviews of emerging technologies such as AI [18] have similarly focused on specific modalities rather than examining outcomes across the broader landscape of digital approaches. Of particular importance in the context of clinical education, no previous research has systematically examined how digital interventions align with established educational learning theories or assessed whether evaluation extends beyond educational settings to measure impact on clinical practice.

Educational theory provides a structured lens for interpreting heterogeneous outcomes across diverse digital interventions, enabling appraisal not only of whether learning occurs but how learning is supported and whether it progresses beyond individual knowledge acquisition toward behavioral and practice-level impact [19]. Without such theoretical grounding, the evidence base remains fragmented by technology type rather than organized around learning mechanisms and translational relevance. This scoping review is informed by two well-established theoretical frameworks in medical education that have not previously been integrated within CST. Kolb’s experiential learning cycle [20] is particularly relevant to CST as it captures how learners develop communication competencies through concrete experience, reflection, conceptualization, and experimentation. Kirkpatrick’s evaluation model [21] complements this by providing a framework to assess the depth and real-world impact of interventions, from learner reactions through to practice-level outcomes. Together, these frameworks provide a dual-lens approach that bridges individual learning processes with clinical translation.

Research Aim

This scoping review mapped digital technologies used to develop clinical communication competencies in undergraduate and postgraduate medical education and examined reported outcomes through the dual lens of experiential learning processes and translational impact.

Research Objectives

Our overarching aim was achieved by the following objectives:

  1. To characterize digital technologies used for developing communication skills in undergraduate and postgraduate medical education, including technology modalities, their educational context, learner outcomes, and how their use has evolved over time
  2. To examine how digital communication skills interventions align with experiential learning processes and the depth of educational impact, as mapped using Kolb’s experiential learning cycle and Kirkpatrick’s evaluation model
  3. To identify evidence gaps in the design, evaluation, and reporting of digital communication skills interventions in relation to behavioral change and clinical practice outcomes

Study Design

This scoping review was conducted using the Joanna Briggs Institute (JBI) methodological framework [22] and reported adhering to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) [23]. The PRISMA-ScR checklist is included in Checklist 1. This protocol was prospectively registered with Open Science Framework [24].

Research Question

We defined our research question as “What is known in the existing literature about the use of digital technologies for CST in undergraduate and postgraduate medical education, and to what extent do reported outcomes reflect experiential learning processes and extend beyond individual learning toward behavioral change and clinical practice-level impact?”

Eligibility Criteria

Studies were included if they met the criteria outlined in the following sections.

Participants

Participants were learners in medical education. For the purposes of this review, learners were classified as either undergraduates or postgraduates.

Undergraduates were defined as medical students at the preregistration level (ie, individuals enrolled in a medical degree program who have not yet obtained a license or registration to practice).

Postgraduates were defined as medical doctors at the postregistration level, including those in internship, residency, specialty training, or other forms of postgraduate education and continuing professional development.

Concept

The concept involved the delivery of CST through digital and technology-enhanced methods. Communication skills included content (what is said), process (how the communication is done), and perceptual (underlying thoughts, emotions, and clinical reasoning) skills as described by Kurtz et al [25]. For this review, we further defined communication skills as the capacity for at least 2-way, dynamic interaction between the learner and a patient (or patient’s representative) to convey health care information between the parties involved. Passive interactions, such as watching a premade video, were excluded.

For the purposes of this review, a patient was defined as either a real human patient, a trained actor (for example, a standardized patient), or a digital or simulated representation designed to support the communication of health care information. Digital technologies were defined as digital systems, tools, platforms, or devices that use electronic data processing to support the delivery, practice, or assessment of communication skills through active learner engagement, including interactive, simulated, or feedback-enabled educational activities. Studies describing only the assessment of communication skills (eg, in Objective Structured Clinical Examination situations) were excluded.

Context

The context included any setting where communication skills were taught, including academic institutions, clinical environments, and synchronous or asynchronous simulation-based scenarios. The interactions were either real-world or role-play encounters.

Evidence Types

All primary study designs were included to gain comprehensive evidence of the topic. We excluded secondary sources such as systematic reviews, scoping reviews, and meta-analyses, as well as nonresearch publications including study protocols, expert opinion, discussion papers, letters, comments, editorials, and book chapters. Conference abstracts without accompanying full-text publications were excluded due to insufficient detail for data extraction. Primary studies cited within relevant reviews were extracted and assessed for eligibility.

Search Strategy

An initial scoping search in Medline (OVID) identified relevant index terms and keywords, which informed the development of a comprehensive search strategy. The strategy incorporated both keywords and controlled vocabulary across three core concept areas: medical education (undergraduate and postgraduate), digital technologies (VPs, simulation, online learning, AI, virtual reality), and communication skills, combined using Boolean operators. The search strategy was adapted from a previously published scoping review on digital education and communication skills [11] and refined to align with this review’s inclusion criteria. During development, the strategy was reviewed in consultation with a research librarian and refined following feedback from the review team, including methodological experts. The final strategy was tested iteratively in Medline and adapted across all selected databases (see Multimedia Appendix 1). Reporting of the search strategy follows the PRISMA-S (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Literature Searches) checklist [26] (Checklist 2).

The electronic databases Medline (via Ovid), Embase (via Ovid), Educational Resource Information Center (ERIC; via EBSCO), and Cumulative Index to Nursing and Allied Health Literature (CINAHL; via EBSCO) were searched from inception to January 5, 2026. Searches were initially conducted in November 2024, re-run in August 2025, and subsequently updated on January 5, 2026, using the same search strategy to identify newly published studies. No changes were made to the search terms or eligibility criteria across the three search runs. Reference lists of included articles and relevant reviews were hand-searched for additional studies.

We imposed no language restrictions on any of the searches. Non-English texts were translated using DeepL Translator software [27]. A standardized quality assessment tool was not used in this review, as the primary aim was to map existing research rather than conduct a detailed critical assessment of individual studies.

Selection of Sources of Evidence

Identified studies were uploaded to Covidence [28] and automatically deduplicated. All titles and abstracts were screened independently by two reviewers (HE, PS, or ET), with discrepancies resolved by group discussion with a fourth reviewer (JT). Full texts of potentially eligible studies were screened independently by two reviewers (HE, PS, JC, or BP) with discrepancies resolved through discussion with the whole research group.

Data Charting Process

Data on study characteristics, learner population and training context, digital technology modality and intervention design features, communication domains addressed, outcome measures and effect direction, use of educational theory, and authors’ conclusions were extracted into a piloted spreadsheet (Multimedia Appendix 2). Data from 10% of included studies were extracted independently by two reviewers (HE, PS, ET, or BP), with findings compared and discussed with the research group. After a high level of agreement was achieved, remaining studies were extracted by one reviewer (HE or PS).

Synthesis of Results

A descriptive analysis was conducted and reported using frequencies and percentages. Studies were categorized by their primary mode of delivery and interaction for developing communication. To support structured synthesis of heterogeneous outcomes, we applied two complementary frameworks:

  1. Kolb’s experiential learning cycle [20]: Interventions were assessed to identify which stages of experiential learning were explicitly supported (where interventions supported more than one stage of the cycle, all relevant stages were recorded):
    • Stage 1 (concrete experience): direct patient/scenario interaction
    • Stage 2 (reflective observation): guided reflection on experiences
    • Stage 3 (abstract conceptualization): synthesis of principles from experiences
    • Stage 4 (active experimentation): application of learning in new contexts
  2. Kirkpatrick’s evaluation model [21]: Study outcomes were mapped to increasing levels of evaluation:
    • Level 1: learner satisfaction or reactions only
    • Level 2: improvement in knowledge or communication skills acquisition
    • Level 3: behavior change in clinical practice
    • Level 4: patient or organizational outcomes

Kirkpatrick’s evaluation model was selected because it is widely recognized in medical education research and enables comparison of outcome depth across diverse intervention types and study designs [29].

The use of these two frameworks allowed consistent appraisal of how interventions were pedagogically structured to support learning processes as well as the depth of outcome assessment. As neither framework was explicitly referenced in included studies, classifications were assigned by authors. Two reviewers (PS and HE) independently conducted all framework assignments, with discrepancies resolved through discussion. This approach enabled systematic comparison across technologies and identification of patterns and evidence gaps in learning processes and outcome depth.


Selection of Sources of Evidence

A total of 11,179 records were identified through database searches, and 1 study was identified via citation searching. Following the removal of 3207 duplicate records, 7973 records were screened. Of these, 341 full-text articles were sought for retrieval, with 336 successfully obtained. After full-text review, 220 studies were excluded based on the predefined eligibility criteria. The reasons for exclusion were incorrect intervention (n=133), irrelevant outcomes (n=38), ineligible participant population (n=19), unsuitable article format (n=16), uncertainty regarding the intervention (n=6), inability to retrieve paper (n=5), and duplicate study (n=1). A total of 121 studies were included in the final synthesis, as detailed in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram (Figure 1). A summary of all included studies is provided in Multimedia Appendix 3 [30-150].

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram showing the identification, screening, eligibility, and inclusion of studies using digital technology for developing communication skills in undergraduate and postgraduate medical education.

Characteristics of Sources of Evidence

Table 1 summarizes key characteristics of the included studies. The earliest studies were published prior to 2000, with relatively low publication activity until the mid-2010s. Most studies were published between 2015 and 2025 (92/121, 76.0%). Publication output peaked in 2025 (24/121), with high publication activity also noted in 2020, 2021, and 2023. Geographically, most studies originated from North America (48/121, 39.7%) and Europe (45/121, 37.2%), with more limited representation from Asia (16/121, 13.2%), Oceania (7/121, 5.8%), South America (4/121, 3.3%), and Africa (1/121, 0.8%; Figures 2 and 3).

Table 1. Characteristics of studies included within the scoping review examining digital communication skills training in medical education (n=121). Values are reported as n (%).

Study design
  • Quasi-experimental: 51 (42.1)
  • Cross-sectional study: 29 (24.0)
  • Randomized controlled trial: 20 (16.5)
  • Nonrandomized comparative study: 10 (8.3)
  • Mixed methods: 8 (6.6)
  • Qualitative research: 2 (1.7)
  • Descriptive case study: 1 (0.8)

Location (by region)
  • North America: 48 (39.7)
  • Europe: 45 (37.2)
  • Asia: 16 (13.2)
  • Oceania: 7 (5.8)
  • South America: 4 (3.3)
  • Africa: 1 (0.8)

Number of students
  • 0-50: 60 (49.6)
  • 51-100: 28 (23.1)
  • >100: 31 (25.6)
  • Not reported: 2 (1.7)

Learner level
  • Undergraduate: 71 (58.7)
  • Postgraduate: 45 (37.2)
  • Mixed: 5 (4.1)

Communication skills focus
  • General communication and history taking: 62 (51.2)
  • Difficult conversations (BBNa, GOCb, DOMEc): 27 (22.3)
  • Communication with defined patient groups: 15 (12.4)
  • Shared decision-making and consent: 7 (5.8)
  • Telecommunication/remote consultations: 5 (4.1)
  • Patient education and information giving: 5 (4.1)

aBBN: breaking bad news.

bGOC: goals of care.

cDOME: disclosure of medical error.

Figure 2. Distribution of studies examining digital communication skills training in medical education by year of publication, illustrating yearly publication trends in digital communication skills training research.
Figure 3. Geographic distribution of studies examining digital communication skills training in medical education by global region.

Quasi-experimental designs were most common (51/121, 42.1%), followed by cross-sectional studies (29/121, 24.0%) and randomized controlled trials (20/121, 16.5%), with few studies using nonrandomized comparative, mixed methods, qualitative, or case study designs. Almost one-half of the studies (60/121, 49.6%) involved fewer than 50 students, while the remainder were split between those with 51-100 students (28/121, 23.1%) and those with more than 100 (31/121, 25.6%). Most studies focused on undergraduate learners (71/121, 58.7%), with a smaller proportion targeting postgraduates (45/121, 37.2%), and a few included mixed groups (5/121, 4.1%).

In terms of CST, general communication and history taking was the most taught (62/121, 51.2%), followed by difficult conversations, such as breaking bad news, goals of care, or disclosure of medical errors (27/121, 22.3%). A smaller proportion of studies focused on communication with defined patient groups (15/121, 12.4%), including adolescents, children, patients with hearing loss or on dialysis, and individuals receiving psychiatric care. Other areas addressed less frequently were shared decision-making and consent (7/121, 5.8%), telecommunication or remote consultations (5/121, 4.1%), and patient education and information giving (5/121, 4.1%).

Synthesis of Results

Digital Technologies Used
Overview

Using an inductive approach, 3 main approaches to digital communication training were identified: recording-based tools, where audio or video was used for teaching with reflection and feedback [30-80]; live stream platforms that supported synchronous interaction between learners, educators, or simulated patients [81-113]; and VP simulators encompassing both scripted systems and those driven by AI [114-145]. A further 5 studies investigated "other" digital tools [146-150].

Recording-based tools (51/121, 42.1%) mostly used video recording (42/121, 34.7%), with fewer using audio recording (3/121, 2.5%) or the recording functions of videoconferencing platforms (6/121, 5.0%). These typically involved capturing simulated or real consultations for later review, often followed by structured feedback and reflection.

Live stream platforms (33/121, 27.3%) were classified separately and referred exclusively to real-time videoconferencing without recording, typically involving simulated patient encounters, role-play sessions, or workshops with immediate feedback.

VP simulators (32/121, 26.4%) used computer-based or AI-enabled VPs to replicate clinical interactions, sometimes in immersive or branching scenarios that allowed repeated practice.

A further 5 studies (5/121, 4.1%) evaluated other digital tools, including AI-assisted case vignettes, patient portals, translation apps, general communication software, and specialty-specific video case platforms.

Recording-Based Digital Tools

Recording-based tools were the most used approach across all learner groups [30-80]. These were used fairly equally with postgraduates (26/51, 51%) and undergraduate learners (24/51, 47.1%), demonstrating applicability across training levels, though mixed learner groups were rarely studied (1/51, 2%). Study content was heavily concentrated on general communication and history taking (29/51, 56.9%), followed by difficult conversations such as breaking bad news (11/51, 21.6%). Very few studies addressed communication with defined patient groups (6/51, 11.8%), shared decision-making and consent (4/51, 7.8%), or patient education and information giving (1/51, 2%). The use of these technologies was predominantly beneficial (33/51, 64.7%) [32,34-36,38,41,43-45,48-53,55,56,60-62,64-68,71-73,75-77,79,80], with reported improvements noted in learner performance, reflective capacity, and confidence. However, a notable minority of studies were inconclusive (8/51, 15.7%) [30,33,39,42,54,57,59,63], reported mixed findings (5/51, 9.8%) [31,46,47,74,78], or showed no effect (5/51, 9.8%) [37,40,58,69,70].

Live Stream Platforms

Live stream–based interventions were evaluated in 33 studies [81-113]. These were used more frequently with undergraduate learners (20/33, 60.6%) than postgraduates (12/33, 36.4%), with mixed learner groups rarely studied (1/33, 3%). Study content showed greater diversity: general communication and history taking: 11/33, 33.3%; difficult conversations: 8/33, 24.2%; telecommunication or remote consultations: 5/33, 15.2%; communication with defined population groups: 5/33, 15.2%; shared decision-making and consent: 2/33, 6.1%; and patient education and information giving: 2/33, 6.1%. The use of live-streaming platforms was highly beneficial, with more than two-thirds of studies (23/33, 69.7%) demonstrating positive effects including increased communication confidence [82,85,89,95,101-103,105,108,110], improved self-perceived competence [97,98,100,106,109], high learner satisfaction [86-88,90], and, in some cases, objective gains in communication performance [83,84,91,102,104]. However, a notable proportion reported mixed findings (7/33, 21.2%) [81,92-94,96,111,112], while few studies were inconclusive (1/33, 3.0%) [99], were equivalent to traditional teaching methods (1/33, 3.0%) [113], or demonstrated no measurable effect (1/33, 3.0%) [107].

VP Simulators

VP simulators were evaluated in 32 studies [114-145]. These were used predominantly with undergraduate learners (24/32, 75%), with fewer at the postgraduate level (5/32, 15.6%) or involving mixed learner groups (3/32, 9.4%). Study content showed a concentrated focus similar to recording-based tools, most often addressing general communication and history taking (19/32, 59.4%), followed by difficult conversations (7/32, 21.9%), with minimal attention to communication with defined patient groups (3/32, 9.4%), patient education (2/32, 6.3%), and shared decision-making and consent (1/32, 3.1%). Reported outcomes were generally positive; nearly two-thirds of studies (21/32, 65.6%) found use of VP simulators beneficial [114,119,120,124-127,129,130,132,134-141,143-145], including improvements in communication performance, empathy, and confidence. Notably, 3 of the 32 studies (9.4%) [133,142,151] reported outcomes equivalent to traditional methods. The remaining studies were inconclusive (4/32, 12.5%) [115,116,118,123], reported mixed findings (2/32, 6.3%) [122,131], or showed no effect (2/32, 6.3%) [117,128].

Other Approaches

Tools that did not align with the 3 main categories were evaluated in 5 studies [146-150]. These included AI-assisted case vignettes (1/5, 20%) [146], patient portals for drafting messages (1/5, 20%) [147], speech-to-speech translation apps (1/5, 20%) [148], general communication software (1/5, 20%) [149], and specialty-specific video case platforms (1/5, 20%) [150]. Learners comprised both undergraduates (3/5, 60%) and postgraduates (2/5, 40%). The majority demonstrated improvements (3/5, 60%) in communication practice and confidence [146,147,150]; 1 study was inconclusive (1/5, 20%) [148], and another reported mixed effects (1/5, 20%) [149].

Trends in the Use of Digital Technologies for CST

Across all learner levels, recording-based approaches were present from the earliest period captured within this review (before 2000) and remained the dominant approach until approximately 2020. Live-streaming platforms expanded markedly after 2020, likely accelerated by the COVID-19 pandemic, with publication numbers surging from sporadic use to 6 studies in 2021 and remaining elevated at 7 studies in 2025. Most strikingly, VP simulators demonstrated explosive recent growth, particularly in 2024 with 14 published studies, representing a dramatic shift in the field’s technological focus (Figure S4 in Multimedia Appendix 4).

These trends varied by learner level. In undergraduate education, the surge in VP simulators was particularly pronounced, with 10 studies published in 2024 alone, while live streaming also showed sustained growth from 2021 onward, plateauing at 3 studies annually by 2025 (Figure S5 in Multimedia Appendix 4). In postgraduate education, the pattern differed notably: Recording-based approaches dominated until the mid-2010s before declining to near absence by 2025, while live streaming showed steady growth after 2020, peaking at 4 studies in 2021‐2022 and 2025 (Figure S6 in Multimedia Appendix 4). VP simulators remained less prominent at the postgraduate level compared with undergraduate settings, showing only gradual uptake in recent years without the dramatic spike observed in undergraduate training.

Training Assessment Tools Reported

Assessment tools for measuring impact of digital technology on CST varied widely across studies (see Table 2 and Figure S7 in Multimedia Appendix 5). Outcome measures were categorized as either objective or self-reported and further classified by validation status (validated, nonvalidated, or unknown). Classification was based on the type and clarity of evidence regarding communication skill development.

Table 2. Assessment tools used by studies examining digital communication skills training in medical education to evaluate outcomes, stratified by learner level and validation status.
Learner level and assessment type | Validated, n (%)a | Nonvalidated, n (%)a | Unknown, n (%)a
Undergraduate (n=71)
  Objective (n=39) | 16 (41) | 16 (41) | 7 (17.9)
  Self-reported (n=32) | 0 (0) | 32 (100) | 0 (0)
Postgraduate (n=45)
  Objective (n=31) | 16 (51.6) | 7 (22.6) | 8 (25.8)
  Self-reported (n=14) | 0 (0) | 13 (92.9) | 1 (7.1)
Mixed (n=5)
  Objective (n=2) | 2 (100) | 0 (0) | 0 (0)
  Self-reported (n=3) | 1 (33.3) | 2 (66.7) | 0 (0)

aPercentages represent the proportion of each validation category within the respective assessment type (objective or self-reported) for each learner level.

Assessment patterns varied by learner level. Among undergraduate learners, objective measures were used slightly more frequently (39/71, 54.9%) than self-reported assessments (32/71, 45.1%). In contrast, postgraduate studies showed a clearer preference for objective assessment (31/45, 68.9%) compared with self-reported tools (14/45, 31.1%). Across all studies, a substantial validation gap emerged: Most studies used nonvalidated assessment tools (71/121, 58.7%), fewer used validated measures (35/121, 28.9%), and a minority had unknown validation status (16/121, 13.2%).

Only 34 of the 121 studies (28.1%) assessed outcomes with a validated objective measure. The objective measures commonly included global rating scales, structured checklists, and simulated patient assessments. A subset of studies used validated frameworks, such as the Calgary-Cambridge Observation Guide, Communication Assessment Tool, SEGUE framework, SPIKES protocol, and the Jefferson Scale of Physician Empathy. Several studies also incorporated qualitative methods such as thematic analysis of interviews or open-ended feedback responses.

Reported Impact of Digital Technology in Relation to Presence of Comparator Group

Study outcomes were grouped into 5 categories: beneficial, mixed, equivalent, inconclusive, or no effect. Studies were classified as beneficial if they included either objective measures (eg, assessor ratings, checklist scores) or self-reported reflections that indicated improvement or effective demonstration of communication skills. Studies were classified as mixed if they reported improvement in certain aspects of communication skills while showing no improvement or even decline in others. Studies were considered equivalent if they compared a digital intervention to another method (eg, traditional teaching or an alternative digital tool) and found no significant difference in communication skill outcomes between groups. Studies were considered inconclusive if they lacked sufficient outcome data to assess impact on communication skills. Finally, studies were labeled no effect if they explicitly reported no improvement in communication performance postintervention.

Most studies demonstrating positive effects of digital technology for CST (80/121, 66.1%) lacked robust methodological designs. Nearly three-quarters (59/80, 73.8%) used either within-group designs (39/80, 48.8%) or had no comparator at all (20/80, 25%), limiting causal inference. Fewer studies (21/80, 26.2%) included an external comparison group, most often comparing against traditional teaching (11/80, 13.8%), followed by no-intervention controls (4/80, 5%), alternative digital tools (3/80, 3.8%), and other nontraditional teaching methods (3/80, 3.8%).

The use of validated assessment tools further weakened the evidence base: Only 26 of the 80 positive studies (32.5%) used validated measures. Among these 26 validated studies, one-half used within-group designs (13/26, 50%), with smaller proportions comparing against traditional teaching (5/26, 19.2%), using no comparator (3/26, 11.5%), no-intervention controls (2/26, 7.7%), other nontraditional teaching (2/26, 7.7%), and alternative digital tools (1/26, 3.8%). Overall, the predominantly positive findings in this field arise largely from study designs with limited methodological rigor and inconsistent use of validated measures, substantially reducing confidence in causal claims about technology effectiveness (Table 3).

Table 3. Comparator groups used in studies examining digital communication skills training in medical education that report beneficial effects of digital communication skills training.
Type of comparator | Total beneficial studies (n=80), n (%)a | Beneficial studies using validated measures (n=26), n (%)b
No comparator | 20 (25) | 3 (11.5)
Within-group | 39 (48.8) | 13 (50)
Traditional teaching | 11 (13.8) | 5 (19.2)
Alternative digital tool | 3 (3.8) | 1 (3.8)
Other nontraditional teaching | 3 (3.8) | 2 (7.7)
No-intervention control | 4 (5) | 2 (7.7)

aPercentages were calculated using the total number of studies reporting beneficial outcomes as the denominator (n=80).

bPercentages were calculated using the total number of beneficial studies using validated outcome measures as the denominator (n=26).

Stage of Kolb’s Experiential Learning Cycle

Studies predominantly situated digital technologies within the early phases of Kolb’s experiential learning cycle [20], with 120 of 121 studies (99.2%) focusing solely on Stage 1 (Concrete Experience), Stage 2 (Reflective Observation), or both. Nearly one-half of studies (54/121, 44.6%) integrated both stages, typically through combinations of simulated encounters and structured feedback, making this the most common approach. Among studies focusing on a single stage, Stage 1 alone was addressed in 28.9% of studies (35/121), while Stage 2 alone was addressed in 25.6% (31/121).

Progression beyond these initial phases was virtually absent: No studies aligned primarily with Stage 3 (Abstract Conceptualization: 0/121, 0%), and only 1 study extended to Stage 4 (Active Experimentation: 1/121, 0.8%). Figure S8 in Multimedia Appendix 6 shows the distribution of studies across Kolb’s learning stages by technology type, demonstrating that all digital modalities concentrated overwhelmingly on Stages 1 and 2.

Outcomes Mapped to Kirkpatrick’s Training Evaluation Model

All included studies reported outcomes that could be classified using Kirkpatrick’s 4-level model [21]. The vast majority focused on classroom or simulation-level outcomes: Most studies (91/121, 75.2%) were evaluated at Level 2, measuring changes in learners’ knowledge, skills, or observed performance in simulated environments, while nearly all others (23/121, 19%) assessed Level 1 outcomes such as learner satisfaction, confidence, or perceived usefulness. Combined, 94.2% of studies (114/121) measured outcomes within educational settings rather than clinical practice.

In contrast, evaluation of real-world impact was minimal. Only 6 of the 121 studies (5%) assessed behavior change in clinical practice at Kirkpatrick’s Level 3, and 1 study (1/121, 0.8%) examined patient-level outcomes at Level 4. This concentration in Levels 1 and 2 indicates that, although there is substantial evidence for the educational effectiveness of digital technologies in controlled settings, the field has generated almost no evidence regarding their impact on clinical practice or patient outcomes. Figure S9 in Multimedia Appendix 7 presents the distribution of studies across technology types and Kirkpatrick levels, illustrating the concentration at Levels 1 and 2 with minimal progression to behavior change (Level 3) or patient outcomes (Level 4).

AI in CST

A small but emerging subset of studies (15/121, 12.4%) [50,116,127,130,131,135-138,140,142-144,150,152] explicitly incorporated AI into CST, spanning undergraduate (11/15, 73.3%), postgraduate (3/15, 20%), and mixed learner populations (1/15, 6.7%). AI applications were concentrated predominantly within VP simulators (12/15, 80%) [116,127,130,131,135-138,140,142-144], with smaller numbers categorized under other digital interventions (2/15, 13.3%) [146,150] and video-recording approaches (1/15, 6.7%) [50]. These AI applications primarily simulated patient interactions and delivered automated feedback, supporting the development of skills such as history taking, information gathering, interview techniques, nonverbal behavior, and empathy.

Reported outcomes were largely positive, with 12 of the 15 studies (80%) demonstrating beneficial effects on communication skills, a higher success rate than for digital technologies overall. However, the remaining studies showed more limited evidence: 1 reported mixed findings [131], 1 was inconclusive [116], and 1 found outcomes equivalent to traditional actor-based training [142]. Notably, the study with mixed findings [131], which relied on self-reported outcomes, highlighted limitations in AI realism and did not provide clear evidence of impact, suggesting that the effectiveness of AI may depend on both implementation quality and measurement approach. The distribution of evidence across technology types, communication domains, learner levels, Kolb’s learning stages, and Kirkpatrick’s evaluation levels is presented in the evidence and gap map (Multimedia Appendix 7), which visually highlights areas of concentrated research and gaps in the current evidence base.


Summary of Principal Findings

This review mapped the digital technologies used for CST and examined reported outcomes through the lens of experiential learning theory and translational evaluation frameworks. The evidence base has expanded substantially over the past decade, with notable acceleration following the COVID-19 pandemic, though research remains concentrated in North America and Europe and predominantly involves undergraduate learners. Three principal technology categories dominated the field: recording-based approaches, live-streaming platforms, and VP simulators. Application of Kolb’s experiential learning cycle [20] and Kirkpatrick’s evaluation model [21] revealed that current digital interventions overwhelmingly support early-stage learning processes and are evaluated almost exclusively within educational settings, with minimal evidence of progression to deeper competency development or translation into clinical practice and patient outcomes.

Comparison of Findings and Interpretation Within the Context of Literature

The field of digital CST has undergone substantial transformation in recent years. Live-streaming platforms expanded markedly after 2020, likely accelerated by the shift to virtual learning environments during the COVID-19 pandemic, while VP simulators demonstrated the most striking recent growth. These trends mirror broader patterns observed in the adoption of digital tools in medical education [15] and in the expansion of virtual simulation for CST [17]. Despite this technological diversification, recording-based approaches that enable structured feedback and reflection continue to be widely used, demonstrating their continued relevance in CST.

Study content across technologies varied in ways that have implications for curriculum design. Recording-based tools and VP simulators were heavily focused on general communication and history taking, with more limited attention to areas such as shared decision-making, patient education, or communication with specific patient populations. Live-streaming platforms, by contrast, demonstrated greater diversity across communication domains, including telecommunication and remote consultation skills. This uneven exploration suggests that some modalities may be more readily adapted to diverse communication scenarios than others, though the reasons for these differences remain unclear and warrant further investigation.

The evidence base, although growing, reveals substantial methodological limitations that constrain confidence in reported outcomes. The vast majority of studies demonstrating beneficial effects relied on designs with limited capacity for causal inference, predominantly using within-group comparisons or lacking comparators entirely. Validation of outcome measures was similarly inconsistent, with most studies using nonvalidated tools. These patterns indicate that, although digital technologies show promise, the strength of evidence supporting their effectiveness remains weak [153]. Notwithstanding, live-streaming platforms showed both the highest rate of beneficial outcomes and greater diversity in the communication domains addressed, suggesting this modality may offer particular flexibility in supporting varied learning objectives [154,155]. Although most VP simulator studies demonstrated beneficial outcomes, a small number of studies reported outcomes equivalent to traditional methods, raising questions about when and for whom VPs offer advantages over established approaches. These patterns suggest that effectiveness may depend not only on the technology itself but also on how it is implemented, for what learning objectives, and with which learner populations [156].

Application of educational theory frameworks revealed 2 striking gaps in current research. First, digital technologies are almost exclusively used to support the initial phases of experiential learning (ie, concrete experience and reflective observation), with virtually no progression to abstract conceptualization or active experimentation. This concentration suggests that current technologies facilitate earlier learning processes for CST, stopping short of the deeper conceptual understanding and experimental application that characterize advanced skill development [20]. Second, outcome evaluation overwhelmingly focused on classroom and simulation-level measures, with minimal assessment of whether learning gains translate to actual clinical practice or patient outcomes. This near absence of real-world evaluation represents a critical knowledge gap regarding whether the educational benefits observed in controlled settings yield meaningful improvements in patient care [1,2].

The findings of this review are consistent with previous observations that research on CST in medical education is dominated by descriptive and quasi-experimental designs, with relatively few robust comparative studies [157]. Our analysis adds new depth by demonstrating that most studies reporting beneficial outcomes relied on within-group designs or lacked comparators entirely and validated outcome measures were used in few positive studies. This methodological profile reveals an evidence base that remains largely exploratory rather than definitive, a pattern that was particularly evident during the rapid post-COVID-19 adoption of digital tools, when educational continuity was prioritized over rigorous evaluation [15]. The heterogeneity in study designs and outcome measures continues to hamper synthesis of effectiveness claims across technologies [11].

AI-enabled CST represented a small proportion of the included studies, with most applications embedded within VP simulators and focused on automating patient interactions or feedback. Although many of these studies reported beneficial effects, few used validated communication outcome measures, and evidence was largely derived from noncomparative designs and proximal learning outcomes. Recent literature [158] has similarly noted that, although interest in AI-supported communication skills education is increasing, research to date has predominantly focused on feasibility and instructional potential, with limited attention to the psychometric validation of AI-specific assessment approaches. Together, these findings suggest that AI-supported CST remains at an early stage of evidence development, requiring more rigorous and theory-aligned evaluation to support meaningful comparison and wider implementation.

A recent meta-analysis of 3 randomized controlled trials [11] found digital technology to be as effective as traditional teaching for developing communication skills, though low-quality evidence limited confidence in the finding. Our review supports this conclusion but extends the evidence base by including a wider range of study designs and technologies, confirming that many reports of benefit arise from weaker methodological approaches. Because only a minority of studies included external comparators, most claims of benefit rest on uncontrolled observations, providing weak grounds for judging whether digital approaches outperform, match, or fall short of established methods. Digital technologies therefore have potential in this field, but stronger study designs and robust outcomes, as also advocated by Gilligan et al [6], are required to reach meaningful conclusions and would enhance confidence in attributing gains directly to the interventions.

Additionally, our study has illuminated the positioning of technologies for CST within educational pedagogy and their translation into clinical outcomes. Evidence of enhancement was most apparent in formative learning contexts: Learners frequently reported increased confidence, satisfaction, and perceived competence, and studies using video recording or conferencing highlighted the value of repeated practice, self-reflection, and timely feedback. These features may offer advantages in certain learning domains, especially where opportunities for repeated practice, asynchronous reflection, or scalable feedback are limited in traditional classroom-based approaches, which typically emphasize role-play and face-to-face interactions [6,159].

Cook and Ellaway [19] previously proposed an evaluation framework to aid synthesis of technology-enhanced learning that focused on the implementation process. Alternatively, placing intervention assessment within education theory, as in this review, can aid comparison while additionally providing insight into the learning and teaching process, thus informing future curricula development. However, the concentration of outcomes at Kirkpatrick Levels 1 and 2 (satisfaction and skill demonstration in educational settings) [21] with minimal assessment of behavior change in practice or patient outcomes raises a fundamental question about translational impact. The persistent emphasis on proximal learning outcomes may reflect pragmatic constraints, as behavior change and patient-level outcomes are more difficult and costly to measure, but it leaves largely unanswered whether communication skills developed through digital training transfer to clinical practice and ultimately improve patient care. This gap between educational and clinical outcomes has been noted in wider educational research, which often prioritizes measures that are easier to capture [160]. However, the near-total absence of practice-level and patient-level evaluation in this field is particularly concerning given the fundamental premise that CST should ultimately enhance patient care. The application of Kirkpatrick’s framework in this review makes this gap explicit: Demonstrating that learners can perform skills in simulated settings does not establish that they will apply those skills effectively in real clinical encounters or that such application improves patient outcomes.

Strengths and Limitations

To our knowledge, this is the first broad and comprehensive review of digital technologies for CST in both undergraduate and postgraduate medical education. Our systematic approach, guided by predefined eligibility criteria and mapped to established educational frameworks, enabled synthesis across diverse study designs and outcomes. This approach not only provided a structured view of where and how digital technologies are being applied within CST but also revealed patterns in how learning is being conceptualized and measured across different educational contexts.

Several limitations should be noted. This was a scoping review; therefore, a formal risk of bias or quality assessment was not undertaken. However, the extracted data clearly indicate that many studies were of low methodological quality, often relying on small samples, weak and noncomparative designs, and unvalidated outcome measures. The predominance of such designs limits the strength of causal claims that can be made about technology effectiveness.

Across the included studies, no digital intervention was associated with a clear deterioration in communication skills; where benefits were not observed, findings were most often neutral or mixed rather than negative. Although this may suggest that digital approaches are unlikely to cause harm, it may also reflect publication bias, limitations of outcome measurement, and the methodological weaknesses evident across most included studies, which lacked the rigor needed to detect potential adverse effects.

In addition, rapid advances in technology mean that some emerging tools may not yet be captured in the published literature. Gray literature was not searched, and, although such sources often lack the detailed outcome reporting necessary for mapping to theoretical frameworks, this exclusion may have resulted in some relevant evidence being missed; the impact of this is likely mitigated by the large number of studies included in our review. Finally, most included studies were conducted in North America or Europe, potentially limiting the generalizability of findings to developing countries, as infrastructure, personal cost, and institutional constraints shape how digital technologies are used in medical training globally [161].

Although Kolb’s experiential learning cycle provided a useful framework for examining which learning processes were targeted by digital CST interventions, we acknowledge that contemporary cognitive and neuroscience research suggests learning may occur nonlinearly, with stages overlapping or occurring simultaneously rather than sequentially [162-164]. In this review, all stages of Kolb’s cycle supported by each intervention were recorded, and we recorded multiple stages of learning when appropriate. More than one-half of the included studies supported both concrete experience and reflective observation, reflecting common pedagogical designs that combine simulated encounters with structured reflection. Despite allowing for overlap, very few studies extended to later stages of experiential learning, with almost no interventions supporting abstract conceptualization or active experimentation. This pattern, whether viewed through a sequential or nonlinear lens, indicates that digital technologies as currently designed and evaluated support initial skill acquisition and reflection but do not facilitate progression toward the conceptual understanding and experimental application needed for mastery.

Conclusions and Recommendations

Digital CST interventions showed promise for supporting early-stage learner outcomes, such as improved confidence, satisfaction, and knowledge or skill acquisition. However, there was little evidence that these technologies supported progression to longer-term competency development in either undergraduate or postgraduate learners, and the translation of digitally acquired skills into improved patient communication or care remains largely unevaluated. Although these technologies offer scalable, flexible training opportunities, current evidence does not establish their effectiveness relative to traditional methods or their impact on clinical practice. Digital CST should be integrated thoughtfully within broader curricula and accompanied by rigorous evaluation extending beyond educational settings to assess impact on patient outcomes. Several priorities emerge for future research and practice. First, greater methodological rigor in evaluation is essential, including comparative designs that establish whether digital approaches match, exceed, or fall short of traditional training, and consistent use of validated outcome measures to ensure robust assessment. Second, research must extend beyond educational outcomes to determine whether learning gains translate to clinical practice and improved patient care. Third, investigation is needed into how digital technologies might support progression through complete learning cycles, moving beyond initial skill acquisition and reflection to support conceptual understanding and experimental application in varied contexts [162-164]. Finally, as AI-enabled approaches continue to emerge in CST, there is opportunity to design them not only as training tools but also as technologies that scaffold competency development and translation into improved patient outcomes.

Acknowledgments

The authors declare that no generative artificial intelligence (AI) tools were used in the research or writing process. According to the GAIDeT taxonomy (2025), no tasks were delegated to generative AI tools. Responsibility for the final manuscript lies entirely with the authors. Generative AI tools are not listed as authors and do not bear responsibility for the final outcomes. Declaration submitted by PS (on behalf of all authors).

Funding

This research did not receive specific external funding. The work was supported through the authors’ affiliation with the University of Nottingham.

Data Availability

The datasets generated or analyzed during this study are available from the corresponding author on reasonable request.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Search strategy.

DOCX File, 31 KB

Multimedia Appendix 2

Blank data extraction form.

XLSX File, 14 KB

Multimedia Appendix 3

Summary table of data extracted from included studies by category of technology.

DOCX File, 69 KB

Multimedia Appendix 4

Distribution of studies examining digital communication skills training in medical education across learner groups by technology type and year of publication.

PDF File, 592 KB

Multimedia Appendix 5

Validation status of assessment tools used in studies examining digital communication skills training in medical education, stratified by learner level.

PNG File, 234 KB

Multimedia Appendix 6

Evidence gap map showing distribution of included studies across Kolb’s experiential learning cycle and Kirkpatrick’s evaluation model by technology type.

PDF File, 221 KB

Multimedia Appendix 7

Evidence gap map of studies examining digital communication skills training in medical education.

PDF File, 425 KB

Checklist 1

PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) checklist.

DOCX File, 112 KB

Checklist 2

PRISMA-S (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Literature Searches) checklist.

DOCX File, 25 KB

  1. Coulter A, Collins A. Making shared decision-making a reality: no decision about me, without me. The King’s Fund. 2011. URL: www.kingsfund.org.uk [Accessed 2026-04-03]
  2. Campos CFC, Olivo CR, Martins MDA, Tempski PZ. Physicians’ attention to patients’ communication cues can improve patient satisfaction with care and perception of physicians’ empathy. Clinics (Sao Paulo). 2024;79:100377. [CrossRef] [Medline]
  3. Good medical practice. General Medical Council. 2018. URL: https://www.gmc-uk.org/professional-standards/the-professional-standards/good-medical-practice [Accessed 2026-04-03]
  4. Good medical practice: a code of conduct for doctors in Australia. Australian Medical Council. URL: https://www.amc.org.au/images/Final_Code.pdf [Accessed 2026-04-03]
  5. Aspegren K, Lønberg-Madsen P. Which basic communication skills in medicine are learnt spontaneously and which need to be taught and trained? Med Teach. Sep 2005;27(6):539-543. [CrossRef] [Medline]
  6. Gilligan C, Powell M, Lynagh MC, et al. Interventions for improving medical students’ interpersonal communication in medical consultations. Cochrane Database Syst Rev. Feb 8, 2021;2(2):CD012418. [CrossRef] [Medline]
  7. Rigby PG, Gururaja RP. World medical schools: the sum also rises. JRSM Open. Jun 2017;8(6):2054270417698631. [CrossRef] [Medline]
  8. Global strategy on human resources for health: workforce 2030. World Health Organization. 2020. URL: https://www.who.int/publications/i/item/9789241511131 [Accessed 2026-04-03]
  9. Pressures in general practice. British Medical Association. 2026. URL: https:/​/www.​bma.org.uk/​advice-and-support/​nhs-delivery-and-workforce/​pressures/​pressures-in-general-practice-data-analysis [Accessed 2026-04-03]
  10. Rotenstein LS, Brown R, Sinsky C, Linzer M. The association of work overload with burnout and Intent to leave the job across the healthcare workforce during COVID-19. J Gen Intern Med. Jun 2023;38(8):1920-1927. [CrossRef] [Medline]
  11. Kyaw BM, Posadzki P, Paddock S, Car J, Campbell J, Tudor Car L. Effectiveness of digital education on communication skills among medical students: systematic review and meta-analysis by the digital health education collaboration. J Med Internet Res. Aug 27, 2019;21(8):e12967. [CrossRef] [Medline]
  12. McGee RG, Wark S, Mwangi F, et al. Digital learning of clinical skills and its impact on medical students’ academic performance: a systematic review. BMC Med Educ. Dec 18, 2024;24(1):1477. [CrossRef] [Medline]
  13. Kyaw BM, Posadzki P, Dunleavy G, et al. Offline digital education for medical students: systematic review and meta-analysis by the digital health education collaboration. J Med Internet Res. Mar 25, 2019;21(3):e13165. [CrossRef] [Medline]
  14. Ahmady S, Kallestrup P, Sadoughi MM, et al. Distance learning strategies in medical education during COVID-19: a systematic review. J Educ Health Promot. 2021;10(1):421. [CrossRef] [Medline]
  15. Gordon M, Daniel M, Ajiboye A, et al. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med Teach. Apr 2, 2024;46(4):446-470. [CrossRef]
  16. Liaw SY, Ooi SW, Rusli KDB, Lau TC, Tam WWS, Chua WL. Nurse-physician communication team training in virtual reality versus live simulations: randomized controlled trial on team communication and teamwork attitudes. J Med Internet Res. Apr 8, 2020;22(4):e17279. [CrossRef] [Medline]
  17. Fernández-Alcántara M, Escribano S, Juliá-Sanchis R, et al. Virtual simulation tools for communication skills training in health care professionals: literature review. JMIR Med Educ. May 6, 2025;11:e63082. [CrossRef] [Medline]
  18. Stamer T, Steinhäuser J, Flägel K. Artificial intelligence supporting the training of communication skills in the education of health care professions: scoping review. J Med Internet Res. Jun 19, 2023;25:e43311. [CrossRef] [Medline]
  19. Cook DA, Ellaway RH. Evaluating technology-enhanced learning: a comprehensive framework. Med Teach. 2015;37(10):961-970. [CrossRef] [Medline]
  20. Kolb D. Experiential Learning: Experience as the Source of Learning and Development. 2nd ed. Pearson FT Press; 2015. ISBN: 978-0-13-389240-6
  21. Kirkpatrick JD, Kirkpatrick WK. Kirkpatrick’s Four Levels of Training Evaluation. Association for Talent Development; 2016. ISBN: 1607281023
  22. Pollock D, Peters MDJ, Khalil H, et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid Synth. Mar 1, 2023;21(3):520-532. [CrossRef] [Medline]
  23. Tricco AC, Lillie E, Zarin W, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. Oct 2, 2018;169(7):467-473. [CrossRef] [Medline]
  24. Seripenah P, Emery H, Tyrrell E, Carson J, Leonardi-Bee J, Evans C, et al. The Use of Digital Technology For Developing Communication Skills in Undergraduate and Post-Graduate Medical Education: A Scoping Review Protocol. Open Science Framework; 2024. URL: https://osf.io/gmfe3/overview [Accessed 2026-04-03]
  25. Kurtz S, Silverman J, Draper J, Dalen J, Platt FW. Teaching and Learning Communication Skills in Medicine. 2nd ed. CRC Press; 2017. [CrossRef]
  26. Rethlefsen ML, Kirtley S, Waffenschmidt S, et al. PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst Rev. Jan 26, 2021;10(1):39. [CrossRef] [Medline]
  27. DeepL Translate. DeepL. URL: https://www.deepl.com/en/translator [Accessed 2026-04-03]
  28. Covidence. URL: https://www.covidence.org [Accessed 2026-04-03]
  29. Bewley WL, O’Neil HF. Evaluation of medical simulations. Mil Med. Oct 2013;178(10 Suppl):64-75. [CrossRef] [Medline]
  30. Beyth Y, Hardoff D, Rom E, Ziv A. A simulated patient-based program for training gynecologists in communication with adolescent girls presenting with gynecological problems. J Pediatr Adolesc Gynecol. Apr 2009;22(2):79-84. [CrossRef] [Medline]
  31. Bonnaud-Antignac A, Campion L, Pottier P, Supiot S. Videotaped simulated interviews to improve medical students’ skills in disclosing a diagnosis of cancer. Psychooncology. Sep 2010;19(9):975-981. [CrossRef] [Medline]
  32. Bos-van den Hoek DW, van Laarhoven HWM, Ali R, et al. Blended online learning for oncologists to improve skills in shared decision making about palliative chemotherapy: a pre-posttest evaluation. Support Care Cancer. Feb 23, 2023;31(3):184. [CrossRef] [Medline]
  33. Bußenius L, Kadmon M, Berberat PO, Harendza S. Evaluating the Global Rating scale’s psychometric properties to assess communication skills of undergraduate medical students in video-recorded simulated patient encounters. Patient Educ Couns. Mar 2022;105(3):750-755. [CrossRef] [Medline]
  34. Cals JWL, Scheppers NAM, Hopstaken RM, et al. Evidence based management of acute bronchitis; sustained competence of enhanced communication skills acquisition in general practice. Patient Educ Couns. Nov 2007;68(3):270-278. [CrossRef] [Medline]
  35. Carrard V, Bourquin C, Stiefel F, Schmid Mast M, Berney A. Undergraduate training in breaking bad news: a continuation study exploring the patient perspective. Psychooncology. Feb 2020;29(2):398-405. [CrossRef] [Medline]
  36. Denizon Arranz S, Blanco Canseco JM, Pouplana Malagarriga MM, et al. Multi-source evaluation of an educational program aimed at medical students for interviewing/taking the clinical history using standardized patients. GMS J Med Educ. 2021;38(2):Doc40. [CrossRef] [Medline]
  37. Dohms MC, Collares CF, Tibério IC. Video-based feedback using real consultations for a formative assessment in communication skills. BMC Med Educ. Feb 24, 2020;20(1):57. [CrossRef] [Medline]
  38. Dowling S, Rouse M, Farrell J, Gannon L, Sullivan G, Cussen K. The acceptability, feasibility and educational impact of a new tool for formative assessment of the consultation performance of specialty registrars in an Irish general practice training scheme. Educ Prim Care. Jan 2007;18(6):724-735. [CrossRef]
  39. Evens S, Curtis P. Using patient-simulators to teach telephone communication skills to health professionals. J Med Educ. Nov 1983;58(11):894-898. [CrossRef] [Medline]
  40. Farnill D, Todisco J, Hayes SC, Bartlett D. Videotaped interviewing of non-English speakers: training for medical students with volunteer clients. Med Educ. Mar 1997;31(2):87-93. [CrossRef] [Medline]
  41. Fischbeck S, Hardt J, Malkewitz C, Petrowski K. Evaluation of a digitized physician-patient-communication course evaluated by preclinical medical students: a replacement for classroom education? GMS J Med Educ. 2020;37(7):Doc85. [CrossRef] [Medline]
  42. Freytag J, Chu J, Hysong SJ, et al. Acceptability and feasibility of video-based coaching to enhance clinicians’ communication skills with patients. BMC Med Educ. Feb 8, 2022;22(1):85. [CrossRef] [Medline]
  43. Hardoff D, Gefen A, Sagi D, Ziv A. Training physicians toward a dignifying approach in adolescents’ health care: a promising simulation-based medical education program. Isr Med Assoc J. Aug 2016;18(8):484-488. [Medline]
  44. Harnof S, Hadani M, Ziv A, Berkenstadt H. Simulation-based interpersonal communication skills training for neurosurgical residents. Isr Med Assoc J. Sep 2013;15(9):489-492. [Medline]
  45. Harendza S, Bußenius L, Gärtner J, Heuser M, Ahles J, Prediger S. “Fit for the finals” - project report on a telemedical training with simulated patients, peers, and assessors for the licensing exam. GMS J Med Educ. 2023;40(2):Doc17. [CrossRef] [Medline]
  46. Hulsman RL, Harmsen AB, Fabriek M. Reflective teaching of medical communication skills with DiViDU: assessing the level of student reflection on recorded consultations with simulated patients. Patient Educ Couns. Feb 2009;74(2):142-149. [CrossRef] [Medline]
  47. Hulsman RL, van der Vloodt J. Self-evaluation and peer-feedback of medical students’ communication skills using a web-based video annotation system. Exploring content and specificity. Patient Educ Couns. Mar 2015;98(3):356-363. [CrossRef] [Medline]
  48. Kaltman S, Talisman N, Pennestri S, Syverson E, Arthur P, Vovides Y. Using technology to enhance teaching of patient-centered interviewing for early medical students. Simul Healthc. Jun 2018;13(3):188-194. [CrossRef] [Medline]
  49. Knowles C, Kinchington F, Erwin J, Peters B. A randomised controlled trial of the effectiveness of combining video role play with traditional methods of delivering undergraduate medical education. Sex Transm Infect. Oct 2001;77(5):376-380. [CrossRef] [Medline]
  50. Kobayashi M, Katayama M, Hayashi T, et al. Effect of multimodal comprehensive communication skills training with video analysis by artificial intelligence for physicians on acute geriatric care: a mixed-methods study. BMJ Open. Mar 3, 2023;13(3):e065477. [CrossRef] [Medline]
  51. Liu C, Lim RL, McCabe KL, Taylor S, Calvo RA. A web-based telehealth training platform incorporating automated nonverbal behavior feedback for teaching communication skills to medical students: a randomized crossover study. J Med Internet Res. Sep 12, 2016;18(9):e246. [CrossRef] [Medline]
  52. Makaricheva EV, Mutigullina AA. Application of situated and simulation-based learning technologies in the formation of communication skills in medical students. Cardiovasc Ther Prev. 2024;23(3S):4216. URL: https://cardiovascular.elpub.ru/jour/issue/view/217 [CrossRef]
  53. Mauksch L, Farber S, Greer HT. Design, dissemination, and evaluation of an advanced communication elective at seven U.S. medical schools. Acad Med. Jun 2013;88(6):843-851. [CrossRef] [Medline]
  54. Mohos A, Mester L, Barabás K, Nagyvári P, Kelemen O. Doctor-patient communication training with simulated patient during the coronavirus pandemic. Orv Hetil. Aug 2020;161(33):1355-1362. [CrossRef] [Medline]
  55. Moulton CA, Tabak D, Kneebone R, Nestel D, MacRae H, LeBlanc VR. Teaching communication skills using the integrated procedural performance instrument (IPPI): a randomized controlled trial. Am J Surg. Jan 2009;197(1):113-118. [CrossRef] [Medline]
  56. Müller E, Diesing A, Rosahl A, Scholl I, Härter M, Buchholz A. Evaluation of a shared decision-making communication skills training for physicians treating patients with asthma: a mixed methods study using simulated patients. BMC Health Serv Res. Aug 30, 2019;19(1):612. [CrossRef] [Medline]
  57. Naik N, Greenwald P, Hsu H, Harvey K, Clark S, Sharma R. “Web-side manner”: a simulation-based, telemedicine communication curriculum. Ann Emerg Med. 2018;72(4 Suppl):S105. [CrossRef] [Medline]
  58. Noordman J, Post B, van Dartel AAM, Slits JMA, Olde Hartman TC. Training residents in patient-centred communication and empathy: evaluation from patients, observers and residents. BMC Med Educ. May 2, 2019;19(1):128. [CrossRef] [Medline]
  59. Noordman J, Verhaak P, van Dulmen S. Web-enabled video-feedback: a method to reflect on the communication skills of experienced physicians. Patient Educ Couns. Mar 2011;82(3):335-340. [CrossRef] [Medline]
  60. Öhlmann H, Icenhour A, Elsenbruch S, Benson S. “Powerful placebo”: a teaching and learning concept addressing placebo and nocebo effects in competency-based communication training. GMS J Med Educ. 2024;41(4):Doc38. [CrossRef] [Medline]
  61. Ozcakar N, Mevsim V, Guldal D, et al. Is the use of videotape recording superior to verbal feedback alone in the teaching of clinical skills? BMC Public Health. Dec 19, 2009;9:474. [CrossRef] [Medline]
  62. Perron NJ, Louis-Simonet M, Cerutti B, Pfarrwaller E, Sommer J, Nendaz M. Feedback based on videotaped consultations or immediately after direct observation: which is more effective? Praxis. 2015;104. URL: https://econtent.hogrefe.com/doi/10.1024/1661-8157/a001994 [CrossRef] [Medline]
  63. Pless A, Hari R, Brem B, Woermann U, Schnabel KP. Using self and peer video annotations of simulated patient encounters in communication training to facilitate the reflection of communication skills: an implementation study. GMS J Med Educ. 2021;38(3):Doc55. [CrossRef] [Medline]
  64. Ravitz P, Lancee WJ, Lawson A, et al. Improving physician-patient communication through coaching of simulated encounters. Acad Psychiatry. Mar 1, 2013;37(2):87-93. [CrossRef] [Medline]
  65. Roter DL, Larson S, Shinitzky H, et al. Use of an innovative video feedback technique to enhance communication skills training. Med Educ. Feb 2004;38(2):145-157. [CrossRef] [Medline]
  66. Roter DL, Hall JA, Kern DE, Barker LR, Cole KA, Roca RP. Improving physicians’ interviewing skills and reducing patients’ emotional distress. A randomized clinical trial. Arch Intern Med. Sep 25, 1995;155(17):1877-1884. [Medline]
  67. Ruesseler M, Sterz J, Bender B, Hoefer S, Walcher F. The effect of video-assisted oral feedback versus oral feedback on surgical communicative competences in undergraduate training. Eur J Trauma Emerg Surg. Aug 2017;43(4):461-466. [CrossRef] [Medline]
  68. Scardovi A, Rucci P, Gask L, et al. Improving psychiatric interview skills of established GPs: evaluation of a group training course in Italy. Fam Pract. Aug 2003;20(4):363-369. [CrossRef] [Medline]
  69. Setubal MSV, Antonio M, Amaral EM, Boulet J. Improving perinatology residents’ skills in breaking bad news: a randomized intervention study. Rev Bras Ginecol Obstet. Mar 2018;40(3):137-146. [CrossRef] [Medline]
  70. Slort W, Blankenstein AH, Schweitzer BPM, Deliens L, van der Horst HE. Effectiveness of the “availability, current issues and anticipation” (ACA) training programme for general practice trainees on communication with palliative care patients: a controlled trial. Patient Educ Couns. Apr 2014;95(1):83-90. [CrossRef] [Medline]
  71. Smith MM, Secunda KE, Cohen ER, Wayne DB, Vermylen JH, Wood GJ. Clinical experience is not a proxy for competence: comparing fellow and medical student performance in a breaking bad news simulation-based mastery learning curriculum. Am J Hosp Palliat Care. Apr 2023;40(4):423-430. [CrossRef] [Medline]
  72. Smith PEM, Fuller GN, Kinnersley P, Brigley S, Elwyn G. Using simulated consultations to develop communications skills for neurology trainees. Eur J Neurol. Jan 2002;9(1):83-87. [CrossRef] [Medline]
  73. Supiot S, Bonnaud-Antignac A. Using simulated interviews to teach junior medical students to disclose the diagnosis of cancer. J Cancer Educ. 2008;23(2):102-107. [CrossRef] [Medline]
  74. Trent ME, Butz A, Serwint J, Gauda E. Teaching adolescent health & cultural communication through simulation. Journal of Adolescent Health. Feb 2015;56(2):S44. [CrossRef]
  75. Van Rossem I, Devroey D, De Paepe K, et al. A training game for students considering family medicine: an educational project report. J Med Life. Oct 2019;12(4):411-418. [CrossRef]
  76. White AA, King AM, D’Addario AE, et al. Crowdsourced feedback to improve resident physician error disclosure skills: a randomized clinical trial. JAMA Netw Open. Aug 1, 2024;7(8):e2425923. [CrossRef] [Medline]
  77. Yuan YY, Scott S, Van Horn N, Oke O, Okada P. Objective evaluation of a simulation course for residents in the pediatric emergency medicine department: breaking bad news. Cureus. Jan 16, 2019;11(1):e3903. [CrossRef] [Medline]
  78. Zick A, Granieri M, Makoul G. First-year medical students’ assessment of their own communication skills: a video-based, open-ended approach. Patient Educ Couns. Oct 2007;68(2):161-166. [CrossRef] [Medline]
  79. Levine O, Myers J, Jyothi Kumar S, et al. Testing the ABCs of serious illness program for oncology trainees: a feasibility trial comparing different learning formats for a virtual communication curriculum. Palliat Med Rep. 2025;6(1):436-445. [CrossRef] [Medline]
  80. Xiao Y, Lian G, Zhang J, et al. Efficacy of a smart glass-enhanced training programme for core doctor-patient communication skills among radiology residents in China. Eur Radiol Exp. Sep 19, 2025;9(1):92. [CrossRef] [Medline]
  81. Abraham R. Turning constraints into opportunities: online delivery of communication skills simulation sessions to undergraduate medical students during the COVID-19 pandemic. Perspectives in Education. 2021;39(4):57-71. URL: https://journals.ufs.ac.za/index.php/pie/issue/view/464 [CrossRef]
  82. Afonso N, Kelekar A, Alangaden A. “I have a cough”: an interactive virtual respiratory case-based module. MedEdPORTAL. Dec 17, 2020;16:11058. [CrossRef] [Medline]
  83. Aluce LM, Cooper JJ, Emlet LL, et al. Bringing competency-based communication training to scale: a multi-institutional virtual simulation-based mastery learning curriculum for emergency medicine residents. Med Teach. Mar 2025;47(3):505-512. [CrossRef] [Medline]
  84. Bittner A, Bittner J, Jonietz A, Dybowski C, Harendza S. Translating medical documents improves students’ communication skills in simulated physician-patient encounters. BMC Med Educ. Feb 27, 2016;16(1):72. [CrossRef] [Medline]
  85. Booth E, McFetridge K, Ferguson E, Paton C. Teaching undergraduate medical students virtual consultation skills: a mixed-methods interventional before-and-after study. BMJ Open. Jun 16, 2022;12(6):e055235. [CrossRef] [Medline]
  86. Bramstedt KA, Prang M, Dave S, Shin PNH, Savy A, Fatica RA. Telemedicine as an ethics teaching tool for medical students within the nephrology curriculum. Prog Transplant. Sep 2014;24(3):294-297. [CrossRef] [Medline]
  87. Clever SL, Novack DH, Cohen DG, Levinson W. Evaluating surgeons’ informed decision making skills: pilot test using a videoconferenced standardised patient. Med Educ. Dec 2003;37(12):1094-1099. [CrossRef] [Medline]
  88. Daetwyler CJ, Cohen DG, Gracely E, Novack DH. eLearning to enhance physician patient communication: a pilot test of “doc.com” and “WebEncounter” in teaching bad news delivery. Med Teach. 2010;32(9):e381-e390. [CrossRef] [Medline]
  89. Deming J, Horecki P, Brustad R, et al. A virtual communication workshop to increase confidence using telehealth modalities. WMJ. May 2024;123(2):124-126. [Medline]
  90. Godoy-Pozo J, Illesca Pretty M, Vidal Villa A, et al. Remote simulation with simulated patient: initial clinical learning experience in medical students. Rev Med Chil. Nov 2023;151(11):1446-1455. [CrossRef] [Medline]
  91. Gür D, Offergeld C, Fabry G, Wünsch A. Communication training in otorhinolaryngology education: comparison of an online and a classroom-based training course. HNO. May 2024;72(5):334-340. [CrossRef] [Medline]
  92. Hayes JR, Ark T, Ruffalo L, Mumm B, Nowik J. “Mastering the difficult conversation” communications course: utilizing OSCEs and workshops to prepare learners for residency. PRiMER. 2025;9:36. [CrossRef] [Medline]
  93. Heller RE, Phillips Z, Wilhite JA, Sartori D, Zabar S, Hayes R. What happens in the Zoom?: a workplace-based assessment of residents’ post-discharge skills in a telemedicine visit with a standardized patient. J Gen Intern Med. 2023;38(Supplement 2). [CrossRef] [Medline]
  94. Holmes RJ, Fischer J, Lowe J, Schell JO, Farouk SS, Sparks MA. Implementation and assessment of virtual standardized patient sessions to teach communication skills to nephrology fellows during COVID-19. J Am Soc Nephrol. 2020;31(448). [CrossRef] [Medline]
  95. Iammeechai W, Srikulmontri T, Siritongtaworn P, Ratta-Apha W. Attitudes and confidence in communication skills of fourth-year medical students after online small group discussion and peer role-play: a survey study. Acad Psychiatry. Apr 2025;49(2):136-141. [CrossRef] [Medline]
  96. Jones S, McNeil M, Rothenberger SD, Jeong K, Nikiforova T. Training internal medicine residents to perform telemedicine visits: a novel skill-based curriculum. MedEdPORTAL. 2025;21:11540. [CrossRef] [Medline]
  97. Khawand-Azoulai M, Kavensky E, Sanchez J, et al. An authentic learning experience for medical students on conducting a family meeting. Am J Hosp Palliat Care. Sep 2025;42(9):882-888. [CrossRef] [Medline]
  98. Knie K, Schwarz L, Frehle C, Schulte H, Taetz-Harrer A, Kiessling C. To Zoom or not to Zoom - the training of communicative competencies in times of COVID 19 at Witten/Herdecke University illustrated by the example of “sharing information”. GMS J Med Educ. 2020;37(7):Doc83. [CrossRef] [Medline]
  99. Lenes A, Klasen M, Adelt A, et al. Crisis as a chance. A digital training of social competencies with simulated persons at the Medical Faculty of RWTH Aachen, due to the lack of attendance teaching in the SARS-Cov-2 pandemic. GMS J Med Educ. 2020;37(7):Doc82. [CrossRef] [Medline]
  100. Mack MC, Claxton RN, Arnold RM, Maurer SH. Peds OncoTalk: a curriculum to teach communication skills to pediatric hematology/oncology fellows. Pediatr Blood Cancer. Nov 2025;72(11):e31912. [CrossRef] [Medline]
  101. Newcomb AB, Duval M, Bachman SL, Mohess D, Dort J, Kapadia MR. Building rapport and earning the surgical patient’s trust in the era of social distancing: teaching patient-centered communication during video conference encounters to medical students. J Surg Educ. 2021;78(1):336-341. [CrossRef] [Medline]
  102. Newcomb AB, Appelbaum RD, Kapadia M, et al. Implementation of a skills-based virtual communication curriculum for medical students interested in surgery. Global Surg Educ. 2022;1(1):48. [CrossRef] [Medline]
  103. Pang JH, Finlay E, Fortner S, Pickett B, Wang ML. Teaching effective informed consent communication skills in the virtual surgical clerkship. J Am Coll Surg. Jul 2021;233(1):64-72. [CrossRef] [Medline]
  104. Phillips Z, Wong L, Crotty K, et al. Implementing an experiential telehealth training and needs assessment for residents and faculty at a Veterans Affairs primary care clinic. J Grad Med Educ. Aug 2023;15(4):456-462. [CrossRef] [Medline]
  105. Pozo P, Landino MC, Maga JM, et al. Situational awareness in telehealth: a virtual standardized patient case for transitioning preclinical to clinical medical students. MedEdPORTAL. 2025;21:11517. [CrossRef] [Medline]
  106. Rasalam R, Bandaranaike S. Virtual WIL clinics in medicine: overcoming the COVID-19 challenge. Int J Work Integr Learn. 2020;21(5):573-585. URL: https://www.ijwil.org/files/IJWIL_21_5_573_585.pdf
  107. Rivet EB, Feldman M, Khandelwal S, et al. Adapting compassionate conversations for virtual mediated communication. J Surg Educ. Sep 2023;80(9):1296-1301. [CrossRef] [Medline]
  108. Ruddock K, Herbert K, Neil C, Gajree N, Dempsey K. Immersive psychiatry simulation: a novel course for medical student training. BJPsych Open. Jun 2021;7(S1):S153-S154. URL: https://www.cambridge.org/core/product/identifier/BJO_7_S1/type/journal_issue [CrossRef]
  109. Sasnal M, Miller-Kuhlmann R, Merrell SB, et al. Feasibility and acceptability of virtually coaching residents on communication skills: a pilot study. BMC Med Educ. Sep 29, 2021;21(1):513. [CrossRef] [Medline]
  110. Taylor AD, Connolly J, Pearce C. A rural doctor’s telehealth training program during the COVID-19 pandemic. Rural Remote Health. Feb 2024;24(1):8032. [CrossRef] [Medline]
  111. Tsui AL, Chau SWH. Small-group, online, actor-as-instructor clinical interview training: a single-blind, randomised controlled study. East Asian Arch Psychiatry. Dec 2024;34(4):134-140. [CrossRef] [Medline]
  112. Yudkowsky R, Valdes W, Raja S, Kiser R. Assessing residents’ telehealth communication skills using standardised patients. Med Educ. Nov 2011;45(11):1155. [CrossRef] [Medline]
  113. Geng W, Cao J, Hu J, et al. “Face-to-face” is not superior to “face-to-screen”: comparing effects of online and offline communication skills course in postgraduate medical students. Front Med (Lausanne). 2025;12:1685789. [CrossRef] [Medline]
  114. Andrade AD, Bagri A, Zaw K, Roos BA, Ruiz JG. Avatar-mediated training in the delivery of bad news in a virtual world. J Palliat Med. Dec 2010;13(12):1415-1419. [CrossRef] [Medline]
  115. Bearman M, Cesnik B. Comparing student attitudes to different models of the same virtual patient. Stud Health Technol Inform. 2001;84(Pt 2):1004-1008. [Medline]
  116. Borg A, Jobs B, Huss V, et al. Enhancing clinical reasoning skills for medical students: a qualitative comparison of LLM-powered social robotic versus computer-based virtual patients within rheumatology. Rheumatol Int. Dec 2024;44(12):3041-3051. [CrossRef] [Medline]
  117. Bruen C, Kreiter C, Wade V, Pawlikowska T. Investigating a self-scoring interview simulation for learning and assessment in the medical consultation. Adv Med Educ Pract. 2017;8:353-358. [CrossRef] [Medline]
  118. Carrard V, Bourquin C, Orsini S, Schmid Mast M, Berney A. Virtual patient simulation in breaking bad news training for medical students. Patient Educ Couns. Jul 2020;103(7):1435-1438. [CrossRef] [Medline]
  119. Courteille O, Josephson A, Larsson LO. Interpersonal behaviors and socioemotional interaction of medical students in a virtual clinical encounter. BMC Med Educ. Apr 1, 2014;14(1):64. [CrossRef] [Medline]
  120. Detering K, Silvester W, Corke C, et al. Teaching general practitioners and doctors-in-training to discuss advance care planning: evaluation of a brief multimodality education programme. BMJ Support Palliat Care. Sep 2014;4(3):313-321. [CrossRef] [Medline]
  121. Deladisma AM, Cohen M, Stevens A, et al. Do medical students respond empathetically to a virtual patient? Am J Surg. Jun 2007;193(6):756-760. [CrossRef] [Medline]
  122. Foster A, Chaudhary N, Murphy J, Lok B, Waller J, Buckley PF. The use of simulation to teach suicide risk assessment to health profession trainees-rationale, methodology, and a proof of concept demonstration with a virtual patient. Acad Psychiatry. Dec 2015;39(6):620-629. [CrossRef] [Medline]
  123. Frey-Vogel AS, Ching K, Dzara K, Mallory L. The acceptability of avatar patients for teaching and assessing pediatric residents in communicating medical ambiguity. J Grad Med Educ. Dec 2022;14(6):696-703. [CrossRef] [Medline]
  124. Jacklin S, Maskrey N, Chapman S. Shared decision-making with a virtual patient in medical education: mixed methods evaluation study. JMIR Med Educ. Jun 10, 2021;7(2):e22745. [CrossRef] [Medline]
  125. Kleinsmith A, Rivera-Gutierrez D, Finney G, Cendan J, Lok B. Understanding empathy training with virtual patients. Comput Human Behav. Nov 1, 2015;52:151-158. [CrossRef] [Medline]
  126. Kron FW, Fetters MD, Scerbo MW, et al. Using a computer simulation for teaching communication skills: a blinded multisite mixed methods randomized controlled trial. Patient Educ Couns. Apr 2017;100(4):748-759. [CrossRef] [Medline]
  127. McCarrick CA, McEntee PD, Boland PA, et al. A randomized controlled trial of a deep language learning model-based simulation tool for undergraduate medical students in surgery. J Surg Educ. Sep 2025;82(9):103629. [CrossRef] [Medline]
  128. McCarthy DM, Formella KT, Ou EZ, et al. There’s an app for that: teaching residents to communicate diagnostic uncertainty through a mobile gaming application. Patient Educ Couns. Jun 2022;105(6):1463-1469. [CrossRef] [Medline]
  129. Mool A, Schmid J, Johnston T, et al. Using generative AI to simulate patient history-taking in a problem-based learning tutorial: a mixed-methods study. Tech Know Learn. 2026. [CrossRef]
  130. Mukadam A, Suresh S, Jacobs C. Beyond traditional simulation: an exploratory study on the effectiveness and acceptability of ChatGPT-4o advanced voice mode for communication skills practice among medical students. Cureus. May 2025;17(5):e84381. [CrossRef] [Medline]
  131. Poulose P. Evaluating the role of AI-simulated patients compared with peer-to-peer learning models in the enhancement of medical education: is it beyond theoretical functionality? Future Healthcare Journal. Jun 2025;12(2):100398. [CrossRef]
  132. Raafat N, Harbourne AD, Radia K, Woodman MJ, Swales C, Saunders KEA. Virtual patients improve history-taking competence and confidence in medical students. Med Teach. May 2024;46(5):682-688. [CrossRef] [Medline]
  133. Sezer B, Sezer TA. Teaching communication skills with technology: creating a virtual patient for medical students. AJET. 2019;35(5):183-198. [CrossRef]
  134. Thompson CM, Bishop MJ, Dillard TC, et al. Healing health care disparities: development and pilot testing of a virtual reality implicit bias training module for physicians in the context of Black maternal health. Health Commun. Mar 2025;40(3):445-456. [CrossRef] [Medline]
  135. Wang Z, Fan TT, Li ML, Zhu NJ, Wang XC. Feasibility study of using GPT for history-taking training in medical education: a randomized clinical trial. BMC Med Educ. Jul 10, 2025;25(1):1030. [CrossRef] [Medline]
  136. Yamamoto A, Koda M, Ogawa H, et al. Enhancing medical interview skills through AI-simulated patient interactions: nonrandomized controlled trial. JMIR Med Educ. Sep 23, 2024;10:e58753. [CrossRef] [Medline]
  137. Chiu J, Castro B, Ballard I, et al. Exploration of the role of ChatGPT in teaching communication skills for medical students: a pilot study. Med Sci Educ. Aug 2025;35(4):1871-1882. [CrossRef] [Medline]
  138. Comulada WS, Ganz PA, Huang YM, et al. A pilot test of an AI voice-driven simulation with feedback for medical students to practice discussing diagnostic mammogram results with patients. Cureus. Oct 2025;17(10):e95606. [CrossRef] [Medline]
  139. Dávidovics A, Dávidovics K, Hillebrand P, Rendeki S, Németh T, Dávidovics A. Virtual patient simulation to enhance medical students’ clinical communication and decision-making skills: a pilot study. BMC Med Educ. Dec 30, 2025;26(1):171. [CrossRef] [Medline]
  140. Herschbach L, Festl-Wietek T, Stegemann-Philipps C, et al. Evaluation of an AI-based chatbot providing real-time feedback in communication training for mental health care professionals: proof-of-concept observational study. J Med Internet Res. Nov 28, 2025;27:e82818. [CrossRef] [Medline]
  141. Jadoon M, Naushad K, Aman F, Khan Y, Durrani M, Ali S. Virtual patients for communication skills training: a mixed methods evaluation. Cureus. Aug 2025;17(8):e91005. [CrossRef] [Medline]
  142. Lee HY, Kim J, Choi H, et al. Comparing AI chatbot simulation and peer role-play for OSCE preparation: a pilot randomized controlled trial. BMC Med Educ. Nov 24, 2025;25(1):1755. [CrossRef] [Medline]
  143. Suárez-García RX, Chavez-Castañeda Q, Orrico-Pérez R, et al. DIALOGUE: a generative AI-based pre-post simulation study to enhance diagnostic communication in medical students through virtual type 2 diabetes scenarios. Eur J Investig Health Psychol Educ. Aug 7, 2025;15(8):152. [CrossRef] [Medline]
  144. Tyrrell EG, Sandhu SK, Berry K, et al. Web-based AI-driven virtual patient simulator versus actor-based simulation for teaching consultation skills: multicenter randomized crossover study. JMIR Form Res. Nov 20, 2025;9:e71667. [CrossRef] [Medline]
  145. Young DG, Herrmann LE, Adeyanju O, et al. Remote VR supports medical students’ communication. Clin Teach. Dec 2025;22(6):41236383. [CrossRef] [Medline]
  146. Ba H, Zhang L, Yi Z. Enhancing clinical skills in pediatric trainees: a comparative study of ChatGPT-assisted and traditional teaching methods. BMC Med Educ. May 22, 2024;24(1):558. [CrossRef] [Medline]
  147. Cheloff AZ, Johnson GM, Joseph NP, Fernandez L, Cluett JL, Kriegel GR, et al. Engaging medical students in communication with primary care patients through the patient portal: lessons during COVID-19. J Gen Intern Med. 2021;36(Suppl 1). [CrossRef] [Medline]
  148. Herrmann-Werner A, Loda T, Zipfel S, Holderried M, Holderried F, Erschens R. Evaluation of a language translation app in an undergraduate medical communication course: proof-of-concept and usability study. JMIR Mhealth Uhealth. Dec 2, 2021;9(12):e31559. [CrossRef] [Medline]
  149. Sun C, Zou J, Zhao L, et al. New doctor-patient communication learning software to help interns succeed in communication skills. BMC Med Educ. Jan 8, 2020;20(1):8. [CrossRef] [Medline]
  150. White AA, King AM, D’Addario AE, et al. Effects of practicing with and obtaining crowdsourced feedback from the video-based communication assessment app on resident physicians’ adverse event communication skills: pre-post trial. JMIR Med Educ. Oct 3, 2022;8(4):e40758. [CrossRef] [Medline]
  151. Dickerson R, Johnsen K, Raij A, et al. Virtual patients: assessment of synthesized versus recorded speech. Stud Health Technol Inform. 2006;119:114-119. [Medline]
  152. Cimini A. 15th National Congress of the Italian Association of Nuclear Medicine and Molecular Imaging (AIMN). Clin Transl Imaging. 2022;10(S1):1-111. [CrossRef]
  153. Sanson-Fisher R, Hobden B, Waller A, Dodd N, Boyd L. Methodological quality of teaching communication skills to undergraduate medical students: a mapping review. BMC Med Educ. Jun 27, 2018;18(1):151. [CrossRef] [Medline]
  154. Liao F, Murphy D, Wu JC, Chen CY, Chang CC, Tsai PF. How technology-enhanced experiential e-learning can facilitate the development of person-centred communication skills online for health-care students: a qualitative study. BMC Med Educ. Jan 25, 2022;22(1):60. [CrossRef] [Medline]
  155. Car J, Ong QC, Erlikh Fox T, et al. The digital health competencies in medical education framework: an international consensus statement based on a Delphi study. JAMA Netw Open. Jan 2, 2025;8(1):e2453131. [CrossRef] [Medline]
  156. Kelly S, Smyth E, Murphy P, Pawlikowska T. A scoping review: virtual patients for communication skills in medical undergraduates. BMC Med Educ. Jun 3, 2022;22(1):429. [CrossRef] [Medline]
  157. Bylund CL, Vasquez TS, Peterson EB, et al. Effect of experiential communication skills education on graduate medical education trainees’ communication behaviors: a systematic review. Acad Med. Dec 1, 2022;97(12):1854-1866. [CrossRef] [Medline]
  158. Dorrestein L, Ritter C, De Mol Z, et al. Validity evidence for communication skills assessment in health professions education: a scoping review. BMJ Open. Sep 5, 2025;15(9):e096799. [CrossRef] [Medline]
  159. Tan XH, Foo MA, Lim SLH, et al. Teaching and assessing communication skills in the postgraduate medical setting: a systematic scoping review. BMC Med Educ. Sep 9, 2021;21(1):483. [CrossRef] [Medline]
  160. Blackmore A, Kasfiki EV, Purva M. Simulation-based education to improve communication skills: a systematic review and identification of current best practice. BMJ Simul Technol Enhanc Learn. 2018;4(4):159-164. [CrossRef] [Medline]
  161. Gutiérrez Hernández A, Pompa Mansilla M, Vadillo Bueno G, Sánchez-Mendiola M. Digital learning in motion: exploring mobile device use in Mexican residents. J Grad Med Educ. Dec 2025;17(6):705-712. [CrossRef] [Medline]
  162. Jarvis P. Towards a Comprehensive Theory of Human Learning. Routledge; 2012. [CrossRef]
  163. Illeris K. Transformative Learning and Identity. Routledge; 2013. [CrossRef] ISBN: 9780203795286
  164. Dayan E, Cohen LG. Neuroplasticity subserving motor skill learning. Neuron. Nov 3, 2011;72(3):443-454. [CrossRef] [Medline]


AI: artificial intelligence
CINAHL: Cumulative Index to Nursing and Allied Health Literature
CST: communication skills training
ERIC: Education Resources Information Center
JBI: Joanna Briggs Institute
PRISMA-S: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Literature Searches
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews
VP: virtual patient


Edited by Stefano Brini; submitted 03.Nov.2025; peer-reviewed by Rakesh Patel, Richard Knox; final revised version received 17.Mar.2026; accepted 17.Mar.2026; published 20.Apr.2026.

Copyright

© Princella Seripenah, Heidi Emery, Bakula Patel, Edward Tyrrell, Julie Carson, Jo Leonardi-Bee, Catrin Evans, Emma Wilson, Jaspal Taggar. Originally published in JMIR Medical Education (https://mededu.jmir.org), 20.Apr.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.